Data Synthesis with Expectation-Maximization
Abstract
A problem of increasing importance in computer graphics is to generate data in the style of some previous training data while satisfying new constraints. If we use a probabilistic latent variable model, then learning the model will normally be performed using Expectation-Maximization (EM), or one of its generalizations. We show that data synthesis for such problems can also be performed using EM, and that this synthesis process closely parallels learning, including identical E-step algorithms. This observation simplifies the process of developing synthesis algorithms for latent variable models.
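The learning/synthesis parallel described above can be illustrated with a toy latent variable model. The sketch below (hypothetical names; a minimal 1-D Gaussian-mixture example, not the paper's actual algorithm) uses one shared E-step for both phases: during learning the M-step updates the parameters with the data held fixed, while during synthesis the M-step updates the data with the parameters held fixed.

```python
import numpy as np

rng = np.random.default_rng(0)

def e_step(x, means, variances, weights):
    """Responsibilities r[n, k] = p(component k | x_n).

    Identical whether we are learning parameters or synthesizing data,
    which is the parallel the abstract points out."""
    x = np.asarray(x)[:, None]
    log_r = (np.log(weights)
             - 0.5 * np.log(2 * np.pi * variances)
             - 0.5 * (x - means) ** 2 / variances)
    log_r -= log_r.max(axis=1, keepdims=True)   # for numerical stability
    r = np.exp(log_r)
    return r / r.sum(axis=1, keepdims=True)

def learn_m_step(x, r):
    """M-step for learning: update parameters, data held fixed."""
    x = np.asarray(x)[:, None]
    nk = r.sum(axis=0)
    means = (r * x).sum(axis=0) / nk
    variances = (r * (x - means) ** 2).sum(axis=0) / nk
    return means, variances, nk / len(x)

def synth_m_step(r, means, variances):
    """M-step for synthesis: update data, parameters held fixed.

    Each point moves to the precision-weighted average of the component
    means, increasing its likelihood under the learned model."""
    prec = r / variances
    return (prec * means).sum(axis=1) / prec.sum(axis=1)

# Learn a two-component mixture from "training" data.
data = np.concatenate([rng.normal(-2, 0.3, 200), rng.normal(3, 0.5, 200)])
means = np.array([-1.0, 1.0])
variances = np.array([1.0, 1.0])
weights = np.array([0.5, 0.5])
for _ in range(50):
    r = e_step(data, means, variances, weights)
    means, variances, weights = learn_m_step(data, r)

# Synthesize new points in the learned style from arbitrary seeds,
# reusing the very same e_step.
x = rng.uniform(-4, 4, 5)
for _ in range(50):
    r = e_step(x, means, variances, weights)
    x = synth_m_step(r, means, variances)
```

After synthesis, each point has climbed to (near) one of the learned modes. A constrained version would simply hold the constrained coordinates fixed inside `synth_m_step`; the E-step would be unchanged.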
Similar Articles
Analysis of Sampling-based Texture Synthesis as a Generalized EM Algorithm
Research on texture synthesis has made substantial progress in recent years, and many patch-based sampling algorithms now produce quality results in an acceptable computation time. However, when such algorithms are applied to textures, whether they provide good results for specific textures, and why they do so, are questions that have yet to be fully resolved. In this paper, we deal specificall...
Generating intonation from a mixed CART-HMM model for speech synthesis
This paper proposes two algorithms for generating intonation from a mixed CART-HMM intonation model for speech synthesis. Based either on a Viterbi search or on the Expectation-Maximization algorithm, the two generation algorithms are analyzed in terms of likelihood and F0 Root Mean Square Error. Listening tests are performed to subjectively evaluate the quality of the generated intonation.
Semi-parametric estimation of the change-point of the mean value of non-Gaussian random sequences by the polynomial maximization method
An application of the maximization technique in the synthesis of polynomial adaptive algorithms for a posteriori (retrospective) estimation of the change-point of the mean value of random sequences is presented. Statistical simulation shows a significant increase in the accuracy of polynomial estimates, which is achieved by taking into account the non-Gaussian character of statistical d...
Multifactor Expectation Maximization for Factor Graphs
Factor graphs allow large probability distributions to be stored efficiently and facilitate fast computation of marginal probabilities, but the difficulty of training them has limited their use. Given a large set of data points, the training process should yield factors for which the observed data has a high likelihood. We present a factor graph learning algorithm which on each iteration merges...
Non-parametric expectation maximization: a learning automata approach
The famous Expectation Maximization technique suffers from two major drawbacks. First, the number of components has to be specified a priori. Second, Expectation Maximization is sensitive to initialization. In this paper, we present a new stochastic technique for estimating the mixture parameters. A Parzen window is used to obtain a discrete estimate of the PDF of the given data. Stochastic Learnin...
Publication date: 2004